Systems & Languages


QMDP-Net: Deep Learning for Planning under Partial Observability

Neural Information Processing Systems

This paper introduces the QMDP-net, a neural network architecture for planning under partial observability. The QMDP-net combines the strengths of model-free learning and model-based planning. It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure of planning in a network learning architecture. The QMDP-net is fully differentiable and allows for end-to-end training. We train a QMDP-net on different tasks so that it can generalize to new ones in the parameterized task set and "transfer" to other similar tasks beyond the set. In preliminary experiments, QMDP-net showed strong performance on several robotic tasks in simulation. Interestingly, while QMDP-net encodes the QMDP algorithm, it sometimes outperforms the QMDP algorithm in the experiments, as a result of end-to-end learning.
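
To make the idea of embedding a planner inside a differentiable network more concrete, here is a rough PyTorch sketch of a QMDP-style value-iteration layer. It is a minimal illustration, not the paper's architecture: the module name, tensor shapes, and the use of plain learned transition/reward parameters (rather than a task-conditioned learned model inside a recurrent policy network) are assumptions for exposition only.

```python
import torch
import torch.nn as nn

class QMDPPlanner(nn.Module):
    """Illustrative sketch of a differentiable QMDP-style planning module.

    The transition and reward tensors are plain learned parameters here;
    all names and sizes (n_states, n_actions, horizon) are placeholders.
    """
    def __init__(self, n_states: int, n_actions: int, horizon: int):
        super().__init__()
        self.horizon = horizon
        # Unnormalized transition logits T[a, s, s'] and rewards R[s, a].
        self.trans_logits = nn.Parameter(torch.zeros(n_actions, n_states, n_states))
        self.reward = nn.Parameter(torch.zeros(n_states, n_actions))

    def forward(self, belief: torch.Tensor) -> torch.Tensor:
        """belief: (batch, n_states) -> QMDP action values (batch, n_actions)."""
        trans = torch.softmax(self.trans_logits, dim=-1)   # row-stochastic T(a, s, s')
        value = belief.new_zeros(self.reward.shape[0])     # V(s), initialized to zero
        for _ in range(self.horizon):                      # K steps of value iteration
            # Q(s, a) = R(s, a) + sum_{s'} T(a, s, s') * V(s')
            q = self.reward + torch.einsum("asp,p->sa", trans, value)
            value = q.max(dim=-1).values
        # QMDP approximation: expected Q-value under the current belief.
        return belief @ q
```

Because every operation above is differentiable, gradients from a policy loss can flow back into the transition and reward parameters, which is the property the paper exploits for end-to-end training.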


Stochastic Variational Deep Kernel Learning

Neural Information Processing Systems

Deep kernel learning combines the non-parametric flexibility of kernel methods with the inductive biases of deep learning architectures. We propose a novel deep kernel learning model and stochastic variational inference procedure which generalizes deep kernel learning approaches to enable classification, multi-task learning, additive covariance structures, and stochastic gradient training. Specifically, we apply additive base kernels to subsets of output features from deep neural architectures, and jointly learn the parameters of the base kernels and deep network through a Gaussian process marginal likelihood objective. Within this framework, we derive an efficient form of stochastic variational inference which leverages local kernel interpolation, inducing points, and structure-exploiting algebra. We show improved performance over stand-alone deep networks, SVMs, and state-of-the-art scalable Gaussian processes on several classification benchmarks, including an airline delay dataset containing 6 million training points, CIFAR, and ImageNet.
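
To make the "deep kernel plus marginal likelihood" idea concrete, here is a minimal PyTorch sketch of an exact-GP regressor with a single RBF base kernel applied to neural network features. It is only an assumed, simplified illustration: the paper's additive kernels over feature subsets, local kernel interpolation, inducing points, and stochastic variational inference are omitted, and all layer sizes are placeholders.

```python
import math
import torch
import torch.nn as nn

class DeepKernelGP(nn.Module):
    """Minimal deep kernel learning sketch: an RBF kernel on deep features,
    trained by maximizing the exact GP marginal likelihood."""

    def __init__(self, in_dim: int, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
        self.log_lengthscale = nn.Parameter(torch.zeros(()))
        self.log_noise = nn.Parameter(torch.tensor(-2.0))

    def kernel(self, x1, x2):
        z1, z2 = self.net(x1), self.net(x2)                  # deep features
        d2 = torch.cdist(z1, z2).pow(2)
        return torch.exp(-0.5 * d2 / self.log_lengthscale.exp().pow(2))

    def neg_log_marginal_likelihood(self, x, y):
        # Single objective through which both the network weights and the
        # kernel hyperparameters receive gradients (joint end-to-end training).
        n = x.shape[0]
        K = self.kernel(x, x) + self.log_noise.exp() * torch.eye(n)
        L = torch.linalg.cholesky(K)
        alpha = torch.cholesky_solve(y.unsqueeze(-1), L)
        return (0.5 * y @ alpha.squeeze(-1)
                + L.diagonal().log().sum()
                + 0.5 * n * math.log(2 * math.pi))
```

Minimizing this negative log marginal likelihood with a standard optimizer jointly trains the network and the base kernel, which is the core mechanism the abstract describes; the paper's contribution is making that training scalable and applicable beyond regression.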


Neural Architecture Optimization

Neural Information Processing Systems

Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, whether based on reinforcement learning (RL) or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method for automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) an encoder embeds/maps neural network architectures into a continuous space; (2) a predictor takes the continuous representation of an architecture as input and predicts its accuracy; and (3) a decoder maps a continuous representation back to a discrete architecture.
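
A toy sketch of the resulting search step is shown below, assuming simple MLP stand-ins for the encoder, predictor, and decoder and a binary "architecture" encoding; these are illustrative assumptions, not the paper's models (which operate on architecture token sequences).

```python
import torch
import torch.nn as nn

# NAO-style search step: encode a discrete architecture into a continuous
# embedding, take gradient-ascent steps on a learned performance predictor in
# that space, then decode back to a discrete architecture.
arch_dim, emb_dim = 32, 16
encoder   = nn.Sequential(nn.Linear(arch_dim, emb_dim), nn.Tanh())
predictor = nn.Linear(emb_dim, 1)          # predicts (normalized) accuracy
decoder   = nn.Linear(emb_dim, arch_dim)   # per-slot logits

def improve(arch: torch.Tensor, step: float = 0.1, n_steps: int = 10) -> torch.Tensor:
    """Move an architecture toward higher predicted accuracy in embedding space."""
    z = encoder(arch).detach().requires_grad_(True)
    for _ in range(n_steps):
        score = predictor(z).sum()
        (grad,) = torch.autograd.grad(score, z)
        z = (z + step * grad).detach().requires_grad_(True)
    return (decoder(z) > 0).float()        # toy per-slot binary decode

candidate = improve(torch.randint(0, 2, (1, arch_dim)).float())
```

The point of the continuous formulation is visible in the loop: instead of mutating or sampling discrete architectures, the search follows predictor gradients in the embedding space.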





A Related Work: Neural Architecture Search (NAS) was introduced to ease the process of manually designing complex architectures

Neural Information Processing Systems

However, existing MP-NAS methods face architectural limitations. These limitations hinder the use of MP-NAS in SOTA search spaces, leaving the challenge of swiftly designing effective large models unresolved. Accuracy is measured after training the network on ImageNet for 200 epochs. We also consider an accuracy prediction model that operates without FLOPs information; Table 2 illustrates the outcomes of these models.
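
As a hedged illustration of an accuracy predictor that can be ablated with or without a FLOPs feature, the toy module below optionally concatenates FLOPs to the architecture encoding. It is not the paper's model; layer sizes and the encoding format are placeholders.

```python
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    """Toy accuracy predictor; set use_flops=False to drop the FLOPs feature."""
    def __init__(self, arch_dim: int, use_flops: bool):
        super().__init__()
        self.use_flops = use_flops
        in_dim = arch_dim + (1 if use_flops else 0)
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, arch_encoding: torch.Tensor, flops: torch.Tensor = None):
        x = torch.cat([arch_encoding, flops], dim=-1) if self.use_flops else arch_encoding
        return self.mlp(x)   # predicted top-1 accuracy
```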




Appendix for Multi-task Graph Neural Architecture Search with Task-aware Collaboration and Curriculum

Neural Information Processing Systems

Notation used in the appendix:
- An operation
- w: the model weight
- α: the architecture parameter
- N: the number of chunks
- θ: the trainable parameter in the soft task-collaborative module
- p: the parameter generated by Eq.(9)
- p: the parameter generated by Eq.(11), which replaces the former p during curriculum training
- δ: the parameter controlling graph structure diversity
- γ: the parameter controlling task-wise curriculum training

BNRist is the abbreviation of the Beijing National Research Center for Information Science and Technology. Here we provide the detailed derivation of Eq.(10). For the other datasets, we use the task-separate head. The experimental results on the OGBG datasets are shown in Table 5; from the table, our method outperforms all multi-task NAS baselines on the three datasets.
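
For readers unfamiliar with the "task-separate head" setup, the short sketch below shows a generic shared backbone with one prediction head per task. It is only an assumed, minimal stand-in: a plain MLP replaces the searched GNN backbone, and the soft task-collaborative weighting of Eq.(9)/(11) is not reproduced, since those equations are not shown in this excerpt.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Shared backbone with task-separate prediction heads (illustrative only)."""
    def __init__(self, in_dim: int, hidden: int, n_tasks: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return [head(h) for head in self.heads]   # one prediction per task
```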